Truth in a World of Artificial Intelligence

Accountability, ethics, and law when artificial intelligence takes the wheel

PRAY FIRST for those designing, regulating, and deploying AI to exercise humility, foresight, and compassion.

Give me wisdom and knowledge, that I may lead this people. 2 Chronicles 1:10

Artificial intelligence (AI) is no longer a distant possibility. It’s actively being used in healthcare, hiring systems, courts, and finance. But who is liable when AI makes a mistake or harmful decision? The question of accountability in AI is not just technical but deeply legal, ethical, and civic.

The Role of Government in AI Today

In the U.S., there is no sweeping federal law that regulates all AI development and deployment. Instead, the government leans on existing statutes, agency guidelines, and executive actions. A Congressional Research Service report notes that “no federal legislation establishing broad regulatory authorities for the development or use of AI or prohibitions on AI has been enacted”—most AI policy remains fragmentary or voluntary.

The White House has issued executive orders, such as Removing Barriers to American Leadership in Artificial Intelligence, to guide federal agencies in adopting AI and balancing innovation with risk management. Agencies like the Department of Justice, Department of Homeland Security, and Commerce are individually exploring structures and regulatory roles.

Because comprehensive federal rules are lacking, states have stepped in. Nearly every state has introduced legislation related to AI, covering transparency, disclosures, or banned uses. This patchwork approach has raised concerns about inconsistency but has also demonstrated the appetite for AI governance in the absence of federal standards.

Should AI Be Held to Human Standards?

If an AI hiring system rejects a candidate, or a medical AI gives a wrong diagnosis, should it be judged by the same standards as a human? Many argue yes: decisions affecting life or livelihood should be explainable, contestable, and non-discriminatory. The challenge is that AI often reasons in opaque ways—its “logic” may not map to human rules or thinking.

Because AI lacks consciousness, applying human standards of intention or negligence becomes complex. Instead, accountability might rest on the system’s design, validation, and deployment—holding developers, organizations, and companies responsible for due diligence, testing, and oversight.

Who is Responsible When AI Causes Harm?

When an AI system causes injury, harm, or discrimination, legal responsibility may fall on the developer, the user, or both. Developers (or model providers) could be liable for flawed training data, biased algorithms, or failure to anticipate known risks. Users (companies or institutions applying the AI) could be liable if they misapply the system or neglect oversight. In many cases, joint liability is likely.

Courts have started to address this via product liability, negligence, or contract law theories. Nevertheless, AI’s novelty means that legal precedents are sparse, and scholars are urging clearer statutes or regulatory rules to handle these emerging cases.

Building Transparency and Fairness

Holding AI accountable requires transparency, which could mean:
– Explanations – AI decisions come with human-readable justifications
– Audit trails – logs of the decision process for later review
– Open testing & benchmarking – making performance data available to third parties
– Model cards and data statements – documentation of training data, limitations, and known biases
These tools help expose bias or errors and enable recourse.
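To make the audit-trail idea concrete, here is a minimal sketch of how a decision log entry might be structured. All names here (`DecisionRecord`, `log_decision`, the field names) are illustrative assumptions, not part of any standard or library:

```python
# Minimal sketch of an audit-trail record for an automated decision,
# so a reviewer can later reconstruct what was decided and why.
# All names and fields are hypothetical, for illustration only.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str   # which system/version made the call
    inputs: dict         # features the system saw (redacted as needed)
    outcome: str         # e.g. "approved" / "rejected"
    explanation: str     # human-readable justification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord) -> str:
    """Serialize a decision record as one JSON line for an append-only log."""
    return json.dumps(asdict(record))

line = log_decision(DecisionRecord(
    model_version="screening-v2.1",
    inputs={"years_experience": 4, "degree": "BS"},
    outcome="rejected",
    explanation="Score below threshold on required-skills match",
))
print(line)
```

Even a record this simple supports the recourse the section describes: a rejected candidate (or a regulator) can ask what version of the system decided, on what inputs, and for what stated reason.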

Ethical Stakes When Machines Decide

Putting machines in charge of decisions affecting human lives raises deep ethical questions. Can we entrust moral judgments to algorithms? Should humans remain in the loop for decisions of great weight, like criminal sentencing or life-saving medical triage?

Machines can amplify existing biases. If training data reflect historical discrimination, AI may perpetuate it. The asymmetry of power is real: those harmed may have little visibility into the decision logic and little ability to challenge it.

The U.S. vs. the World on AI Governance

The European Union is currently ahead of the U.S. in AI law and regulation. Its AI Act takes a risk-based approach that categorizes systems by the severity of potential harm and imposes stricter controls on “high-risk” AI, including mandatory transparency, human oversight, and conformity assessments. By contrast, the U.S. currently relies heavily on voluntary standards and agency guidance.

Some experts have called for creating a dedicated federal AI oversight agency to fill this gap and provide coherence. Such an agency could issue binding rules, certify AI systems, and enforce penalties for misuse—bringing the U.S. more in line with regulatory models in the EU and beyond.

Cybersecurity and Data in AI

AI is deeply intertwined with cybersecurity. The same systems that analyze data may also be targets. The U.S. faces escalating cyber threats: ransomware attacks on public agencies, supply-chain intrusions, and more. Government agencies coordinate with private companies, though tensions arise over data access and privacy.

Some would argue the government should be permitted conditional access to private data during emergencies, but with strong checks, to detect large-scale threats. U.S. privacy laws (e.g. FISA reforms, state privacy statutes) attempt to balance civil liberties and national security. Past breaches (like OPM, Equifax, or health-sector hacks) reveal the danger when security fails.

Many state and local governments are under-resourced to defend against advanced cyber threats. Meanwhile, public education in online safety lags far behind. People need to understand simple protections, such as multi-factor authentication, timely software updates, and phishing awareness, as first lines of defense.

Misinformation, Speech, and Governance

AI can greatly amplify misleading or fake information by generating realistic text, video, or audio forgeries (deepfakes). Governments face pressure to act yet must tread carefully to protect free speech. The U.S. has no centralized “truth police.” Instead, laws focus on libel, fraud, and election interference. Scholars have debated whether social media platforms should be treated like utilities with regulatory obligations.

One idea is to require clear labeling for AI-created content while protecting open dialogue. Democracies must walk a careful line—defending truth without diminishing free expression.

Why It Matters and How We Can Respond

Decisions about who gets a job, who receives care, or who faces legal consequences hold lasting significance. Technology may evolve, but the worth of every person does not. As James 2:8 reminds us, “If you really fulfill the royal law according to the Scripture, ‘You shall love your neighbor as yourself,’ you are doing well.” When AI is used to serve others, it should reflect that same love and never diminish it.

We should care about this issue because the choices we make today will influence how future generations live, work, and are treated. Remaining passive allows others to define what justice and compassion look like in a digital age. Instead, we can seek to shape systems that honor people above profit and truth above convenience.

Ultimately, technology mirrors the hearts of those who build and guide it. AI can become a tool for helping many, an engine of division, or a means of mercy, depending on our intent. As followers of Christ, we can and should model discernment, humility, and hope, reminding a data-driven world that wisdom begins not in code, but in character.

HOW THEN SHOULD WE PRAY:  

— Pray for protection against unseen harm and that algorithms do not inflict unseen injustice. Set a guard, O Lord, over my mouth; keep watch over the door of my lips. Psalm 141:3
— Pray for understanding in AI regulation and clear communication so people know their rights, and for reconciliation when technology erodes trust. But the wisdom from above is first pure, then peaceable, gentle, open to reason, full of mercy and good fruits, impartial and sincere. James 3:17

CONSIDER THESE ITEMS FOR PRAYER:     

  • Pray for lawmakers monitoring the advancing technology of AI to remain independent of outside influences as they consider how to govern it and whether regulations are needed.
  • Pray for courage among whistleblowers, auditors, and civil society to speak truthfully during the creation and implementation of AI systems.
  • Pray for the technologists and policymakers to be mindful of public needs and human dignity.

Sources: Library of Congress, Congressional Research Service, White & Case LLP, Morgan Lewis, IAPP, Washington Post, Brennan Center for Justice, World Economic Forum, The Munich Security Conference, White House
